    Autonomous tissue retraction with a biomechanically informed logic based framework

    Autonomy in robot-assisted surgery is essential to reduce surgeons' cognitive load and eventually improve the overall surgical outcome. A key requirement for autonomy in a safety-critical scenario such as surgery is the generation of interpretable plans that rely on expert knowledge. Moreover, the Autonomous Robotic Surgical System (ARSS) must be able to reason about the dynamic and unpredictable anatomical environment and to quickly adapt the surgical plan in case of unexpected situations. In this paper, we present a modular Framework for Robot-Assisted Surgery (FRAS) in deformable anatomical environments. Our framework integrates a logic module for task-level interpretable reasoning, a biomechanical simulation that complements data from real sensors, and a situation awareness module for context interpretation. The framework's performance is evaluated on simulated soft tissue retraction, a common surgical task performed to expose a region of interest hidden by overlying tissue. Results show that the framework has the adaptability required to successfully accomplish the task, handling dynamic environmental conditions and possible failures while guaranteeing the computational efficiency required in a real surgical scenario. The framework is made publicly available.
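
    The control flow such a framework implies can be pictured as a sense-plan-act loop in which the situation awareness module triggers replanning whenever reality diverges from the biomechanical prediction. The sketch below is purely illustrative: every class and method name is an assumption, not the paper's API.

    ```python
    # Minimal sketch of a logic-driven sense-plan-act loop for autonomous
    # tissue retraction. All interfaces here are hypothetical.

    def retraction_loop(planner, simulation, awareness, robot, goal):
        """Execute a logic-generated plan, replanning on deviations."""
        plan = planner.plan(goal, awareness.current_state())
        while not awareness.goal_reached(goal):
            action = plan.next_action()
            predicted = simulation.predict(action)       # biomechanical rollout
            observed = robot.execute(action)             # real sensor feedback
            if awareness.deviates(predicted, observed):  # unexpected situation
                # quickly adapt: ask the logic module for a new plan
                plan = planner.plan(goal, awareness.current_state())
    ```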

    Mapping natural language procedure descriptions to linear temporal logic templates: an application in the surgical robotic domain

    Natural language annotations and manuals can provide useful procedural information and relations for the highly specialized scenario of autonomous robotic task planning. In this paper, we propose and publicly release AUTOMATE, a pipeline for automatic task knowledge extraction from expert-written domain texts. AUTOMATE integrates semantic sentence classification, semantic role labeling, and identification of procedural connectors in order to extract templates of Linear Temporal Logic (LTL) relations that can be directly implemented in any sufficiently expressive logic programming formalism for autonomous reasoning, assuming some low-level commonsense and domain-independent knowledge is available. This is the first work that bridges natural language descriptions of complex LTL relations and the automation of full robotic tasks. Unlike most recent similar works, which assume strict language constraints in substantially simplified domains, we test our pipeline on texts that reflect the expressiveness of natural language used in available textbooks and manuals. In fact, we test AUTOMATE in the surgical robotic scenario, defining realistic language constraints based on a publicly available dataset. In the context of two benchmark training tasks with texts constrained as above, we show that automatically extracted LTL templates, after translation to a suitable logic programming paradigm, achieve planning success comparable to logic programs written by expert programmers, in reduced time.
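
    To make the notion of an LTL template concrete, the sketch below maps procedural connectors to parametric LTL formulas and instantiates one with action predicates of the kind semantic role labeling would extract. The mapping is an illustrative guess at the flavor of the pipeline's output, not AUTOMATE's actual template set.

    ```python
    # Illustrative connector-to-LTL-template mapping; 'a' and 'b' stand for
    # extracted action predicates. Templates are examples, not the paper's.

    LTL_TEMPLATES = {
        "before": "!{b} U {a}",       # b may not happen until a has
        "after":  "G({a} -> F {b})",  # b eventually follows a
        "until":  "{a} U {b}",        # keep doing a until b holds
        "while":  "G({a} -> {b})",    # b must hold whenever a does
    }

    def instantiate(connector, a, b):
        """Fill a template with predicates from semantic role labeling."""
        return LTL_TEMPLATES[connector].format(a=a, b=b)

    # "Grasp the tissue before retracting it"
    print(instantiate("before", a="grasp(tissue)", b="retract(tissue)"))
    # -> !retract(tissue) U grasp(tissue)
    ```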

    Position-based modeling of lesion displacement in Ultrasound-guided breast biopsy

    Purpose: Although ultrasound (US) images represent the most popular modality for guiding breast biopsy, malignant regions are often missed by sonography, preventing the accurate lesion localization that is essential for a successful procedure. Biomechanical models can support the localization of suspicious areas identified on a pre-operative image during US scanning, since they are able to account for anatomical deformations resulting from US probe pressure. We propose a deformation model that relies on the position-based dynamics (PBD) approach to predict the displacement of internal targets induced by probe interaction during US acquisition. Methods: The PBD implementation available in NVIDIA FleX is exploited to create an anatomical model capable of deforming online. Simulation parameters are initialized on a calibration phantom under different levels of probe-induced deformation, then fine-tuned by minimizing the localization error of a US-visible landmark of a realistic breast phantom. The updated model is used to estimate the displacement of other internal lesions due to probe-tissue interaction. Results: The localization error obtained when applying the PBD model remains below 11 mm for all the tumors, even for input displacements on the order of 30 mm. The proposed method obtains results aligned with FE models with faster computational performance, suitable for real-time applications. In addition, it outperforms the rigid model used to track lesion position in US-guided breast biopsies, at least halving the localization error for all the displacement ranges considered. Conclusions: The position-based dynamics approach has proved successful in modeling breast tissue deformations during US acquisition. Its stability, accuracy, and real-time performance make such a model suitable for tracking lesion displacement during US-guided breast biopsy.
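
    The core PBD operation is a position projection that moves particles to satisfy geometric constraints, which is what lets probe-induced surface displacement propagate to internal lesions. Below is a didactic distance-constraint projection in NumPy; it is a textbook PBD step under assumed toy values, not the NVIDIA FleX implementation the paper uses.

    ```python
    import numpy as np

    # One PBD distance-constraint projection between two particles.
    # w1, w2 are inverse masses (w = 0 pins a particle in place).

    def project_distance(p1, p2, w1, w2, rest_len, stiffness=1.0):
        """Move particles so their distance approaches rest_len."""
        d = p2 - p1
        dist = np.linalg.norm(d)
        if dist < 1e-9 or w1 + w2 == 0.0:
            return p1, p2
        correction = stiffness * (dist - rest_len) / dist * d
        p1 = p1 + (w1 / (w1 + w2)) * correction
        p2 = p2 - (w2 / (w1 + w2)) * correction
        return p1, p2

    # Probe pressure displaces the surface particle; the constraint drags
    # the internal (lesion-side) particle along with it.
    surface = np.array([0.0, 0.0, -0.03])   # pushed 30 mm by the probe
    lesion  = np.array([0.0, 0.0, -0.06])
    surface, lesion = project_distance(surface, lesion, w1=0.0, w2=1.0,
                                       rest_len=0.02)
    print(lesion)  # lesion pulled back toward its rest distance: [0, 0, -0.05]
    ```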

    Robust Real-Time Needle Tracking in 2-D Ultrasound Images Using Statistical Filtering

    Percutaneous image-guided tumor ablation is a minimally invasive surgical procedure for the treatment of malignant tumors using a needle-shaped ablation probe. Automating the insertion of the needle with a robot could increase the accuracy and decrease the execution time of the procedure. Extracting the needle tip position from the ultrasound (US) images is of paramount importance for verifying that the needle is not approaching any forbidden regions (e.g., major vessels and ribs), and it could also be used as a direct feedback signal to the robot inserting the needle. A method for estimating the needle tip has previously been developed, combining a modified Hough transform, image filters, and machine learning. This paper improves that method by introducing a dynamic selection of the region of interest in the US images and by filtering the tracking results using either a Kalman filter or a particle filter. Experiments have been conducted in which a biopsy needle was inserted into a phantom by a robot guided by an infrared tracking system. The proposed method has been evaluated by comparing its estimates with the needle tip positions manually detected by a physician in the US images. The results show a significant improvement in precision and a more than 85% reduction in the 95th percentile of the error compared with previous automatic approaches. The method runs in real time at a frame rate of 35.4 frames/s. The increased robustness and accuracy can make our algorithm usable in autonomous surgical systems for needle insertion.
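
    As a rough picture of the statistical-filtering stage, the sketch below runs a constant-velocity Kalman filter over raw 2-D tip detections. The motion model and noise covariances are illustrative assumptions, not the paper's tuning, and the particle-filter variant is omitted.

    ```python
    import numpy as np

    # Constant-velocity Kalman filter over 2-D needle-tip detections.
    dt = 1.0 / 35.4                         # frame period at 35.4 frames/s
    F = np.array([[1.0, 0.0, dt,  0.0],     # state: [x, y, vx, vy]
                  [0.0, 1.0, 0.0, dt ],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0, 0.0],     # only tip position is measured
                  [0.0, 1.0, 0.0, 0.0]])
    Q = 1e-4 * np.eye(4)                    # process noise (assumed)
    R = 1e-2 * np.eye(2)                    # measurement noise (assumed)

    x = np.zeros(4)                         # initial state
    P = np.eye(4)                           # initial covariance

    def kalman_step(z):
        """One predict/update cycle on a raw tip detection z = [x, y]."""
        global x, P
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x[:2]                        # filtered tip position

    print(kalman_step(np.array([10.2, 5.1])))
    ```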

    Robot assisted electrical impedance scanning for tissue bioimpedance spectroscopy measurement

    Intraoperative tissue identification is important and frequently required in modern surgical approaches to guide the operation. For this purpose, a novel robot-assisted sensing system equipped with a wide-band impedance spectroscope is developed. Without introducing an external sensor probe to the operating site, the proposed system incorporates two robotic instruments, one for electric current excitation and one for voltage measurement. Based on the developed measurement strategy and algorithm, the electrical conductivity and permittivity of the tissue region can be calculated. Experiments based on simulation, saline solutions, and ex-vivo tissue phantoms are conducted. The experimental results demonstrate that the proposed system has a high measurement accuracy (≥97%). Using a simple support vector machine, 100% accuracy is achieved in identifying five different tissues. Given these convincing results, the presented sensing system shows great potential for offering effective, fast, and safe tissue inspection.
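
    For intuition on how conductivity and permittivity fall out of an impedance measurement, the sketch below applies the textbook parallel-plate cell model: the admittance is split into real and imaginary parts and scaled by a geometric cell constant. The paper's two-instrument robotic setup uses its own measurement strategy; this snippet and all its values are only illustrative.

    ```python
    import math

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def tissue_properties(Z, freq_hz, cell_constant):
        """Z: complex impedance (ohm); cell_constant = gap/area (1/m)."""
        Y = 1.0 / Z                      # admittance G + jB
        omega = 2.0 * math.pi * freq_hz
        sigma = Y.real * cell_constant   # conductivity, S/m
        eps_r = Y.imag * cell_constant / (omega * EPS0)  # rel. permittivity
        return sigma, eps_r

    sigma, eps_r = tissue_properties(Z=800 - 400j, freq_hz=100e3,
                                     cell_constant=25.0)
    print(f"sigma = {sigma:.4f} S/m, eps_r = {eps_r:.0f}")
    ```

    Feature vectors of conductivity and permittivity collected across frequencies could then feed a standard SVM classifier, analogous to the five-tissue identification reported above.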

    Towards Hierarchical Task Decomposition using Deep Reinforcement Learning for Pick and Place Subtasks

    Deep Reinforcement Learning (DRL) is emerging as a promising approach to generate adaptive behaviors for robotic platforms. However, a major drawback of DRL is its data-hungry training regime, which requires millions of trial-and-error attempts, impractical when running experiments on robotic systems. Learning from Demonstrations (LfD) has been introduced to address this issue by cloning the behavior of expert demonstrations. However, LfD requires a large number of demonstrations, which are difficult to acquire since dedicated complex setups are required. To overcome these limitations, we propose a multi-subtask reinforcement learning methodology in which complex pick-and-place tasks can be decomposed into low-level subtasks. These subtasks are parametrized as expert networks and learned via DRL methods. Trained subtasks are then combined by a high-level choreographer to accomplish the intended pick-and-place task under different initial configurations. As a testbed, we use a pick-and-place robotic simulator to demonstrate our methodology and show that our method outperforms a benchmark methodology based on LfD in terms of sample efficiency. We transfer the learned policy to the real robotic system and demonstrate robust grasping using objects of various geometric shapes.
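
    The decomposition can be pictured as a high-level choreographer dispatching pretrained subtask policies in sequence. The sketch below is an illustrative skeleton only; the subtask list, the policy interface, and the environment API are all assumptions, not the paper's implementation.

    ```python
    # Hypothetical choreographer over DRL-trained expert subtask policies.

    SUBTASK_ORDER = ["reach", "grasp", "lift", "move", "place"]

    def choreograph(policies, env, max_steps=200):
        """Run each expert policy until its local success predicate holds."""
        obs = env.reset()
        for name in SUBTASK_ORDER:
            policy = policies[name]              # DRL-trained expert network
            for _ in range(max_steps):
                obs = env.step(policy.act(obs))  # low-level control step
                if policy.subtask_done(obs):     # local success predicate
                    break
            else:
                return False                     # subtask timed out -> abort
        return True                              # full pick and place achieved
    ```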

    Multi-task temporal convolutional networks for joint recognition of surgical phases and steps in gastric bypass procedures

    Purpose: Automatic segmentation and classification of surgical activity is crucial for providing advanced support in computer-assisted interventions and autonomous functionalities in robot-assisted surgeries. Prior works have focused on recognizing either coarse activities, such as phases, or fine-grained activities, such as gestures. This work aims at jointly recognizing two complementary levels of granularity directly from videos, namely phases and steps. Methods: We introduce two correlated surgical activities, phases and steps, for the laparoscopic gastric bypass procedure. We propose a multi-task multi-stage temporal convolutional network (MTMS-TCN), along with a multi-task convolutional neural network (CNN) training setup, to jointly predict the phases and steps and benefit from their complementarity to better evaluate the execution of the procedure. We evaluate the proposed method on a large video dataset consisting of 40 surgical procedures (Bypass40). Results: We present experimental results from several baseline models for both phase and step recognition on Bypass40. The proposed MTMS-TCN method outperforms single-task methods in both phase and step recognition by 1-2% in accuracy, precision, and recall. Furthermore, for step recognition, MTMS-TCN achieves 3-6% better performance than LSTM-based models on all metrics. Conclusion: In this work, we present a multi-task multi-stage temporal convolutional network for surgical activity recognition, which shows improved results compared to single-task models on a gastric bypass dataset with multi-level annotations. The proposed method shows that joint modeling of phases and steps is beneficial for improving the overall recognition of each activity type.
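
    Architecturally, the joint setup amounts to a shared temporal trunk with one output head per task. The PyTorch sketch below shows a single-stage version with dilated convolutions; the layer sizes and class counts are placeholders, and the paper's actual model stacks multiple refinement stages.

    ```python
    import torch
    import torch.nn as nn

    # Single-stage multi-task TCN sketch: shared dilated-conv trunk,
    # separate phase and step heads. Dimensions are illustrative.

    class MultiTaskTCN(nn.Module):
        def __init__(self, in_dim, n_phases, n_steps, hidden=64, layers=4):
            super().__init__()
            convs, dim = [], in_dim
            for i in range(layers):            # dilations 1, 2, 4, 8...
                convs += [nn.Conv1d(dim, hidden, kernel_size=3,
                                    padding=2 ** i, dilation=2 ** i),
                          nn.ReLU()]
                dim = hidden
            self.trunk = nn.Sequential(*convs)  # shared temporal features
            self.phase_head = nn.Conv1d(hidden, n_phases, kernel_size=1)
            self.step_head = nn.Conv1d(hidden, n_steps, kernel_size=1)

        def forward(self, x):                   # x: (batch, in_dim, time)
            h = self.trunk(x)
            return self.phase_head(h), self.step_head(h)

    model = MultiTaskTCN(in_dim=2048, n_phases=11, n_steps=44)
    phases, steps = model(torch.randn(1, 2048, 100))
    print(phases.shape, steps.shape)  # per-frame logits for both tasks
    ```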

    Deformation Compensation in Robotically-Assisted Breast Biopsy

    A major challenge of current breast biopsy procedures is lesion displacement due to needle-tissue interaction, respiration, and involuntary motions, possibly causing the needle to miss the target. These deformations are intrinsically accounted for when the procedure is performed under ultrasound (US) guidance, but the low US resolution often makes target visualization impossible. By contrast, MRI-guided biopsies provide high-resolution images with excellent sensitivity, but they do not account in any way for breast deformations. The MRI and Ultrasound Robotic Assisted Biopsy (MURAB) project aims to solve this challenge through a combination of these technologies.